Patent abstract:
The invention relates to a method for determining the direction of an object in an image. It comprises a "field baptism" phase with: - acquisition by circular scanning, by means of a first imaging device of determined position, of a series of partially overlapping scene images, - automatic extraction of descriptors defined by their image coordinates and their radiometric characteristics, with at least one unknown-direction descriptor in each image overlap, - automatic estimation of the relative rotation of the images and matching of the descriptors of the overlaps, - identification in the images of at least one reference of known direction and determination of the image coordinates of each reference, - estimation of the attitude of each image and of the position of the first imaging device, - calculation of the absolute directions of the descriptors; and an online operation phase with: - acquisition of at least one current image of the object from a second imaging device of determined position, - extraction of descriptors in each current image, - matching of the descriptors of each current image with the descriptors whose absolute direction was calculated during the "field baptism" phase, to determine the absolute direction of the descriptors of each current image, - estimation of the attitude of each current image, - calculation of the absolute direction of the object according to a predetermined shooting model of each current image.
Publication number: FR3034553A1
Application number: FR1500636
Filing date: 2015-03-30
Publication date: 2016-10-07
Inventor: Alain Simon
Applicant: Thales SA
IPC main class:
Patent description:

[0001] METHOD OF DETERMINING A DIRECTION OF AN OBJECT FROM AN IMAGE OF THE OBJECT. The field of the invention is that of determining the absolute direction of an object from the geographical position from which an optronic image of it is acquired.
[0002] Determining absolute geographical directions by means of an optronic imaging device that does not have an orientation device enabling a direction measurement of quality compatible with that sought constitutes a technological challenge. Indeed, the systems that set out to solve this problem generally use orientation measurement components whose cost remains high to achieve the desired performance, which can be of the milliradian class. For example, an angular performance of the 1 milliradian class at 90% is necessary to contain the location error within the TLE1 category (6 m at 90%) for an object situated 5 km from the sensor. Orientation measuring devices of this class are uncommon, expensive and too heavy to be considered in a portable device. The following attitude measurement solutions are unsuitable, for the following reasons: - Magnetic compasses are not very accurate (10 milliradian class), difficult to integrate and very sensitive to the EM environment; they use the local magnetic declination (also poorly known, in the 10 milliradian class) to transform the magnetic azimuth into a geographical azimuth or direction; their cost is relatively low but can reach €1,000. - FOGs (Fiber Optic Gyrometers), ring laser gyrometers (RLG) and hemispherical resonator gyrometers (HRG) are bulky, heavy, power-hungry and costly. - MEMS are not sufficiently accurate (a few milliradians), have low maturity and require a calibration procedure that can be long and complex. - Celestial objects allow high performance but are not always visible (stars are difficult to see by day or in overcast conditions). - Positioning systems such as GNSS (Global Navigation Satellite System) are only moderately accurate for the baseline lengths envisaged, and their volume, mass and consumption are not compatible with a portable device. - The use of landmarks, often extracted from ortho-image data (equivalent to a vertical view) or from maps, is not easy to implement when using an image of opportunity (especially with a small field and a grazing line of sight) since: o this approach first requires having the vertical view with the right level of detail, o the probability of being able to match a landmark to a detail present in the image decreases quadratically with the field of the image, o the probability of being able to associate several landmarks in an image decreases more than linearly with their number. - The technique based on the polarization of the sky, a recent technique bio-inspired by the orientation of insects for their navigation, offers weak performance.
[0003] The object of the invention is to overcome these disadvantages. The method according to the invention proposes a high-performance mode of estimating image orientations and object directions of the 1 mrad class, on a portable optronic image acquisition system, by exploiting: - a field baptism phase (PBT), which consists in learning and archiving information on the environment of the imaged scene, in the form of signatures characterizing details extracted from the images, in the frequency band or bands of the optronic system, these signatures furthermore being associated with their direction in a geographical frame; - an online operation phase (POL), which consists in exploiting the information archived in the learning phase to determine, in real time, the geographical direction and possibly the location of objects within a newly acquired image, whose signatures are extracted and compared with the archived signatures.
[0004] In the following: - we designate by (geographical) direction of an object of a scene the vector joining the system to the object; this direction is characterized by its elevation (angle relative to the plane perpendicular to the local vertical) and its geographical azimuth (angle between geographical North and the projection of the direction to the object onto the horizontal plane); - we designate by orientation or attitude of an image the information used to orient the image completely in a frame covering the 3 dimensions of geographical space (for example, minimally, with the 3 Euler angles: roll, pitch and yaw). Moreover, the determination of the direction corresponding to a pixel of an image depends on its image coordinates and is done using a parametric geometric model of the optronic system. The parameters of the model depend on the position of the system and the orientation of the image, as well as on internal parameters (such as the focal length or distortion of the optronic sensor). More specifically, the subject of the invention is a method of determining the direction of an object of a scene in an optronic image, with a predetermined desired performance. It is mainly characterized in that it comprises a "field baptism" phase, or PBT, and an online operation phase, or POL. The "field baptism" phase comprises the following steps: - acquisition by circular scanning, by means of a first optronic imaging device of determined position, of a series of partially overlapping optronic images, including one or more images of the scene (step A1), - automatic extraction in the images of descriptors defined by their image coordinates and their radiometric characteristics, with at least one unknown-direction descriptor in each image overlap (step B1), - from the descriptors of the overlaps, automatic estimation of the relative rotation of the images between them and matching of the descriptors of the overlaps (step C1), - identification in the images of at least one reference of known absolute direction, of precision compatible with the desired performance in a local geographical frame, and determination of the image coordinates of each reference (step D1), - from the matched descriptors of the overlaps, the direction and the image coordinates of each reference, automatic estimation of the attitude of each image and of the position of the first imaging device, the so-called fine registration step (step E1), - from the attitude of each image, the position and the predetermined internal parameters of the first imaging device, and the image coordinates of each descriptor, calculation of the absolute directions of the descriptors according to a predetermined shooting model which models in parametric form the image-formation geometry of the imaging device (step F1), that is to say the geometric path of the photons from the scene onto a pixel of the detector.
The online operation phase comprises the following steps: - acquisition of at least one image of the object whose direction is sought, called the current image, from a second imaging device of determined position (step A2), - extraction of descriptors in each current image (step B2), - matching of the descriptors of each current image with the descriptors whose absolute direction was calculated during the "field baptism" phase, to determine the absolute direction of the descriptors of each current image (step C2), - from the absolute directions of the descriptors of each current image, estimation of the attitude of each current image and possibly of internal parameters such as the focal length and/or the distortion of the second imaging device (step D2), - from the image coordinates of the object in each current image, the attitude of each current image, and the position and predetermined internal parameters of the second imaging device, calculation of the absolute direction of the object according to a predetermined shooting model of each image of the object (step E2).
[0005] This method, which could be described as an odometric compass, thus implements a preliminary learning phase characterizing the environment of the imaging device, and then a real-time operation phase that uses the learned information to determine the absolute orientations of images and deduce those of the objects present in the images.
[0006] The learning proceeds from a so-called "field baptism" phase, which consists in acquiring overlapping images over all or part of the horizon sweep and in learning the environment by extracting and constructing compressed information which characterizes its content in the frequency band or bands of the imaging device.
[0007] The exploitation of a current image then makes it possible to determine instantaneous geographical directions of objects present in these images. It is implemented under the following conditions of use: - on a portable optronic system, possibly allowing the use of a light tripod-type physical support, - in an environment which does not necessarily have reception of GNSS signals or, equivalently, on a system not necessarily carrying a GNSS receiver (of the GPS, Glonass or Galileo type, for example), - without orientation means, or with means of low cost (<€100), low mass (<100 g) and low quality (10 mrad class), therefore without gyrometer, without quality inertial instruments (UMI, CNI), without goniometer, - optionally without moving the optronic system longitudinally or vertically, - without particular knowledge of the object to be located, in particular its geographical coordinates or dimensions, - without the system being able to exchange information with the object (in particular, no collaboration), - without knowledge of the scene area corresponding to the acquired image of the object, in particular landmarks or dimensions. The desired performance is typically: - in azimuth, in the range from 0.5 to 2 mrad for PBT and POL, - in elevation: o with an inclinometer accessible in POL, better than 20 mrad in PBT and around 1 mrad in POL, o without an inclinometer accessible in POL, from 1 to 2 mrad in PBT and POL. Thus, since relatively light inclinometer-type elevation measurement equipment is available, the difficulty lies essentially in delivering in real time a direction of the mrad class in azimuth, knowing that traditional low-SWaP systems based on magnetic compasses are rather of the 10 mrad class. The focal length of the first imaging device may be different from the focal length of the second imaging device. According to one characteristic of the invention, the internal parameters of the first imaging device, including its focal length, are estimated during the fine registration step, and/or the internal parameters of the second imaging device of the online phase, including its focal length, are estimated during the step of estimating the attitude of the current image. Preferably, the first and second imaging devices are the same imaging device. According to one characteristic of the invention, the descriptors of the "field baptism" phase are archived in a database with their radiometric characteristics and their absolute directions.
[0008] A descriptor spatial distribution map (CDSD) may be constructed prior to the online operation phase.
[0009] The series of images acquired during the "field baptism" phase advantageously covers a complete horizon turn. The series of images acquired during the "field baptism" phase may also cover only a portion of the complete horizon turn; at least two references are then identified in the images. The position of the first imaging device is determined by positioning means equipping said device, or is estimated from several references. The same holds for the position of the second imaging device.
[0010] The method may include a step of constructing a panoramic image from the finely registered images, each pixel of the panoramic image being associated with an absolute direction. The images acquired are, for example, video images. Each reference is typically a terrestrial landmark or a celestial object. The first and second imaging devices are mounted on board a fixed-position platform or on board a mobile platform of known trajectory, such as a land or naval vehicle or an aircraft. BRIEF DESCRIPTION OF THE FIGURES. Other features and advantages of the invention will become apparent on reading the detailed description which follows, given by way of non-limiting example and with reference to the appended drawings, in which: FIG. 1 represents a flowchart of the main steps of the method according to the invention; FIG. 2 schematically represents an example of a panorama to be scanned during the field baptism phase; FIG. 3a schematically represents an example of images acquired by scanning the panorama of FIG. 2, and FIG. 3b these images, on which references of known direction are indicated; FIG. 4 illustrates the acquisition of images at 3 average elevations forming 3 strips, with bearing on the abscissa and elevation on the ordinate; FIGS. 5 schematically represent, in top view, different ways of acquiring the image information over a horizon sweep: as a continuous video sequence with strong overlap between the images (FIG. 5a), by image-by-image acquisition with overlap adapted and controlled during the acquisition (FIG. 5b), and according to a mixed mode that agglomerates a continuous sequence acquired first with, in a second stage, some scattered images acquired one by one over the sweep, without needing overlap between themselves but overlapping those acquired first (FIG. 5c); FIG. 6 illustrates an example of footprint in the scene and overlap of images acquired by a circular scan in the azimuth directions λ and the elevation directions φ, without coverage of a complete horizon turn; FIG. 7 illustrates a way of completing the spatial coverage and the information developed in PBT during POL; FIG. 8 schematically represents an example of acquisition from an aircraft.
[0011] From one figure to another, the same elements are identified by the same references. The invention is based on learning the scene content by image processing and on using this information to determine the directions of objects present in a scene image quickly and with good absolute accuracy. Once its direction is determined, the object can possibly be located in the environment. The method according to the invention can be implemented on terrestrial cameras that have no internal positioning means (GNSS receiver), attitude measuring device (UMI, magnetic compass, gyrometer), mounting means (tripod) or rangefinder. One of the technical problems to be solved, underlying these phases of learning then of calculating object directions, is to orient the acquired images while respecting the following conditions of use: - on a portable optronic system, possibly allowing the use of a lightweight tripod-type physical support, - in an environment that does not necessarily have GNSS signal reception or, equivalently, on a system without a GNSS receiver (GPS, Glonass, Galileo for example), - without orientation means, and therefore without a gyrometer, or with means of low cost (<€100), low mass (<100 g) and low quality (10 mrad class), - without quality inertial instruments (UMI, CNI), without goniometer, - possibly without moving the optronic system longitudinally or vertically, - without particular knowledge of the object to be located, in particular its geographical coordinates or dimensions, - without the system being able to exchange information with the object, - without knowledge of the scene area corresponding to the acquired image of the object, in particular landmarks or dimensions.
[0012] The method for determining the direction of an object of a scene from the acquisition position of an optronic image, with a predetermined desired performance, is described in connection with FIG. 1. It is implemented by means of an optronic system equipped with: - an optronic image acquisition device (or imaging device) in the visible or IR range, such as a camera or binoculars, with predetermined internal parameters (focal length and possibly field of view (FOV), principal image point, parameters describing the radial and tangential optical distortion, pitch of the photosensitive cells in both image directions), of known position (it can therefore be provided with a GNSS, GLONASS or other type of positioning receiver, but it will be seen later that without such a device the position can nevertheless be known), and - a unit for processing the acquired images.
[0013] The method mainly comprises two phases: a so-called "field baptism" learning phase and an online operation phase. The "field baptism" phase comprises the following steps: A1) Acquisition of a series of partially overlapping optronic images, including one or more images of the scene in which the object whose direction will be determined during the next phase will a priori be. B1) Automatic extraction in the images of descriptors of interest, with at least one unknown-direction descriptor in each image overlap. C1) From the descriptors of the overlaps, estimation of the relative rotation of the images between them, and matching of the descriptors of the overlaps, from one image to the neighboring image. D1) Identification in the images of at least one reference of known absolute direction, and determination of the image coordinates of each reference. E1) From the matched descriptors of the overlaps, the direction and the image coordinates of each reference, estimation of the attitude of each image, of the position of the imaging device, and possibly of its internal parameters, including the focal length. F1) From the attitude of each image, the position and the internal parameters of the first imaging device, and the image coordinates of each descriptor, calculation of the absolute directions of these descriptors. We will now detail these steps. A1) Acquisition, either automatic (from a platform equipped with a "Pan and Tilt" mechanism, i.e. a mechanism for programming the orientation of the acquisitions in specific directions relative to the platform, which makes it possible to orient the system in an optionally automatically programmable way, or from an aircraft), quasi-automatic (video), or image by image by an operator, by scanning the scene 1 according to a closed figure that can be circular, an example of which is shown in FIG. 2, by means of a first optronic imaging device of given position, of a series of partially overlapping optronic images 2 shown in FIG. 3a, including one or more images of the scene (generally smaller than the scene 1) in which the object whose direction will be determined during the next phase will a priori be.
[0014] The acquisition is performed in a visible or IR channel, with a given field of view of the device. The overlap 21 of an image with the neighboring image is preferably between 30% and 60%; it can vary from one image to another, as can be seen in FIG. 3b. Preferably, the field of vision covered by all these images is that of a complete horizon turn, as is the case in FIG. 3b and FIGS. 7 and 8. One seeks to have a loop closure, that is to say an overlap between an already acquired image (the first, for example, but not necessarily) and the last one (for example, but not necessarily, in the sense that the penultimate would do just as well, as the case may be). This loop closure is performed: - over a complete horizon circle with a single sweep in elevation (1 strip is obtained), so as to obtain overlap in bearing, - over a portion of the complete turn with several sweeps staggered in elevation along different strips (for each elevation, a strip is obtained by scanning), by a rectangular or elliptical type of movement of the line of sight (LdV) of the first imaging device, so as to obtain overlap in bearing and in elevation of the strips respectively corresponding to the sweeps, as can be seen in FIGS. 4 and 6, - by combining the two previous approaches and making several turns at the same average elevation or at different average elevations, as in FIG. 4. In situations where acquisition over a complete turn is not accessible, one is limited to a scan with movements of the LdV at different elevations, in the form of ellipses or figures of '8' for example. This type of observation is insufficient to correct certain internal parameters, for example the focal length, in the absence of GCPs (ground control points) in the form of terrestrial landmarks or celestial objects, but it allows the values of some observable quantities, such as angular drift, to be refined. In order to refine a focal length value of the imaging device in such a situation, two reference directions will advantageously be exploited in the sequence of acquired images.
[0015] The wavelengths corresponding to the images acquired in PBT can be in different spectral bands, with: - a daytime sensitivity in the color visible or Near InfraRed (NIR) bands, - a day-and-night sensitivity in the SWIR (Short Wave InfraRed), MWIR (Medium Wave InfraRed) or LWIR (Long Wave InfraRed) bands. Several modes of acquiring the images and of processing the acquired images can be envisaged. The acquisition, which can be manual or automatic (Pan & Tilt, or more generally carried out by means of a platform-mounted optronic system with or without servo-control), can be carried out according to the following modes: o (MAV) a video acquisition mode, which has a high-rate acquisition capability (e.g. 10 to 100 Hz), shown schematically in FIG. 5a, o (MAI) a frame-by-frame acquisition mode, which allows the acquisition of images one by one with a longer acquisition time (for example, from 0.1 to 1 Hz), as shown in FIG. 5b; acquisition triggering may be manual or programmed, particularly on a system using a Pan & Tilt platform, o (MAM) a mixed acquisition mode, which constructs the image information at the input of the processing by inserting into a sequence acquired in MAV images acquired in MAI (see FIG. 5c); the interest of this approach is described a little further on. For the processing, different implementation options may be used: o (MTB) a batch processing method, which processes the information by accessing all the stored or archived images as a batch, o (MTD) a dynamic processing method, which performs processing on the fly during the acquisition of the images, needing simultaneous access to at most 2 or 3 images at a given moment, o (MTS) a segment processing method, which processes pieces of the video one after another as angular sectors, such as angular portions in azimuth (for example 1/4 or 1/2 of a horizon sweep) or in elevation (for example an assembly of 5 strips). For the acquisition of the images, when the first device has a high-rate acquisition capability (MAV, with a rate generally of 10 Hz), the acquired images are stored at the video rate over the horizon sweep. The overlaps between images being a priori superabundant, a step of the method determines which images of this video to retain. For this purpose, one uses, for example, an algorithm of the Kanade-Lucas type ("An Iterative Image Registration Technique with an Application to Stereo Vision", 1981), fed by Tomasi points ("Good Features to Track", 1994), which estimates the translations between images, followed by a decimation of the video according to the calculated overlaps, the FOV (Field Of View) of the imaging device and the target overlap between images.
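By way of illustration, a minimal sketch of this decimation is given below, assuming OpenCV is available: Tomasi's "good features to track" feed a pyramidal Kanade-Lucas tracker, the median horizontal translation is accumulated, and a frame is retained whenever the overlap with the last retained frame falls to the target value. The function and parameter names are hypothetical, not those of the patent.

```python
import cv2
import numpy as np

def decimate_sweep(frames, width_px, target_overlap=0.5):
    """Return the indices of the video frames to retain over a sweep.

    frames: sequence of grayscale images; width_px: image width in pixels
    (standing in for the lateral FOV); target_overlap: desired overlap 0..1.
    """
    keep, shift = [0], 0.0
    prev = frames[0]
    max_shift = (1.0 - target_overlap) * width_px
    for i, cur in enumerate(frames[1:], start=1):
        # Tomasi "good features to track" in the previous frame
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
        if pts is not None:
            # pyramidal Kanade-Lucas tracking into the current frame
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
            ok = status.ravel() == 1
            if ok.sum() >= 10:
                # accumulate the median horizontal translation
                shift += float(np.median(nxt[ok, 0, 0] - pts[ok, 0, 0]))
        if abs(shift) >= max_shift:
            keep.append(i)   # overlap with last kept frame reached target
            shift = 0.0
        prev = cur
    return keep
```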
[0016] When the acquisition system has an inclinometer but its elevation measurements cannot be synchronized with the dates of the different image acquisitions (or, more broadly, when other image auxiliary data (DAI) may be offset from these acquisition dates: approximate measurements making it possible to know the position of the system, all or part of the orientation of the images, or approximate parameters of the imaging device such as an approximate focal length), the process can be conducted in 2 passes: the first pass is made as in video mode (MAV), without recording the DAI with the corresponding images; the second pass is done in image-by-image acquisition mode, recording the DAI with the corresponding images. This pass does not require overlap between the acquired images (see FIG. 5c).
[0017] More specifically, a sampling of the horizon sweep is performed in order to have, at separate azimuths, several images and the corresponding auxiliary data such as elevation measurements; typically, fewer than 10 images (and corresponding measurements) provide sufficient information.
[0018] The objective is to have overlaps of these images with those of the previous pass, and to hold the LdV of the optronic system steady for a sufficient duration in order to have a synchronous elevation measurement for each acquisition. The images of this pass are then systematically inserted into those of the first pass to constitute the sequence of input images for the processing of the following steps, with the characteristic of enabling a consistent rise in quality toward the precision objective.
[0019] In the case of image-by-image acquisition (MAI), the operator must take some precautions to ensure image recovery. In the case of video mode acquisition (MAV), the recoveries are often important and the method is preferably added to a step of automatically sorting images by eliminating images or descriptors which are too redundant. When the first image acquisition device has several fields and / or zooms, the acquisition of the "field baptism" phase can be performed in a large field so as to reduce the acquisition time but also for possible have a higher probability of kissing bitters on a single image. Conveniently: In MAY mode, the operator controls the orientation of the first image acquisition device and sufficient coverage of the images by moving the image acquisition device and triggering image recordings. by one at each orientation that it retains. In this mode the number M of images to be acquired is of the order 35 Ma / [FOV. (1-1))], where I is the average overlap between images 3034553 expressed in%, FOV is the longitudinal FOV of the image expressed in the same unit as the angle a which represents the horizontal angle scanned during acquisitions. For an acquisition on the complete horizon turn a = 360 °, and with for example a 6 ° lateral field image acquisition device and an overlap between images of 60%, the number of images to be acquired is of M = 150 images. This number of images can be reduced by half if one is satisfied with a vertical coverage of 20% of the field but this last approach does not allow a priori to obtain as many descriptors, or so many "good" 10 descriptors, which may have an impact on the quality of the estimate. In MAV mode, the acquisition is done automatically by angular sector (with one of the 2 previous modes video or manual) and the acquisition is eventually stopped when the memory reaches a certain threshold. At this stage the acquired images are processed to extract the descriptor information. In addition to the descriptors to feed the BDD (Database descriptors), the operator can retain one to a few images to position bitter, the memory being released from other images. In addition to these user-guided acquisitions, the device can also be implemented with a platform having a pan and tilt mechanism or any other mechanism for scheduling acquisition orientation in specific directions relatively at the platform.
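The following few lines, given as a simple check, reproduce the numerical example above (the helper name is ours):

```python
import math

def image_count(alpha_deg, fov_deg, overlap):
    # M = alpha / [FOV * (1 - r)], rounded up to a whole number of images
    return math.ceil(alpha_deg / (fov_deg * (1.0 - overlap)))

print(image_count(360, 6, 0.60))   # -> 150 images for a complete horizon turn
```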
[0020] For the image processing, the practical choice of one of the processing methods is conditioned by the available memory and computing capacity (CPU). When the memory (compared with the size of the video) and the computing capacity of the processor (CPU, compared with the time acceptable to the user) allow it, the batch processing method (MTB) is recommended, insofar as it allows the simultaneous management of all the information (including multiple overlaps) and offers a better consistency check of the estimated parameters. In case of insufficient memory or CPU, dynamic processing will treat the data extracted from the images one after the other.
[0021] The choice of the processing method directly impacts the estimation technique retained in step E1 (FIG. 1). MTB suggests a least-squares type approach (Gauss-Newton or Levenberg-Marquardt), while the dynamic processing method (MTD) directs toward an Extended Kalman Filter (EKF) or a UKF (Unscented Kalman Filter). When the first image acquisition device (or imaging device) has too little memory to store all the images of a horizon sweep, the acquisition is processed: - either with MTB, but progressively freeing the memory of the images and storing the extracted descriptor information, - or with the MTD method or the segment processing method (MTS).
[0022] At the end of this step, the system has a sequence of images with adequate overlap, on which the processing of the following steps will be performed. Preferably, regardless of the image acquisition mode, a reference image is chosen from among these images.
[0023] B1) Automatic extraction in the images of descriptors of interest, defined by their image coordinates and their radiometric characteristics, with at least one unknown-direction descriptor in each image overlap 21 (one descriptor is sufficient if an inclinometer elevation measurement is available, for example; if not, at least two descriptors should be provided). Descriptors extracted in non-overlapping image portions are also exploited since, once the parameters of the shooting model have been characterized, they can benefit from a quality orientation that can be used in the online operation phase.
[0024] The operator can also manually define descriptors by designating details and their correspondences in images. This information can also be used to: - orient the images relative to each other in the subsequent step C1, then in absolute terms in the subsequent step E1 of PBT, - determine the orientation of a POL image as soon as the designated details have a sufficiently characteristic radiometric signature. The descriptors detected in the images are, as non-limiting examples, of the following kinds: - SIFT, for Scale Invariant Feature Transform. In this case they are key points characterized by an information vector describing the histogram of the gradients around the pixel in question. This step is typically performed as initially described by Lowe (2001). - SURF, for Speeded Up Robust Features. Like SIFT, this approach locates details (primitives) in images and characterizes them, as a faster alternative to the SIFT approach.
[0025] - FREAK, for Fast Retina Keypoint (Alahi et al., IEEE 2012), - Harris points and image moments. In practice, the descriptor extraction algorithm is configured to ensure that: - The number of extracted descriptors is satisfactory for the application (for example, at least 2 per overlap area). This may be particularly difficult to verify in areas with little detail, owing to the constitution of the scene or to particular illumination of the detector. For this, one acts mainly on the specific parameters of the descriptor extraction algorithm (threshold, level of pyramidal processing, ...). - The spatial density of the descriptors is not too high. Otherwise, one needlessly increases the size of the systems to be estimated in what follows, and moreover increases the risk of mis-associating descriptors. In practice, the selection algorithm will eliminate descriptors corresponding to directions that are angularly too close with respect to the FOV of the imaging device. A sketch of such an extraction is given below.
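As an illustration, a minimal sketch of such a configured extraction is given below, assuming OpenCV's SIFT implementation; the contrast threshold stands in for the extraction threshold mentioned above, and the angular-spacing filter uses the IFOV to eliminate descriptors that are too close. The function and its parameters are hypothetical.

```python
import cv2
import numpy as np

def extract_descriptors(image_gray, ifov_rad, min_sep_mrad=1.0):
    """Extract SIFT descriptors, then thin out angularly too-close ones."""
    sift = cv2.SIFT_create(nOctaveLayers=3, contrastThreshold=0.04)
    keypoints, descriptors = sift.detectAndCompute(image_gray, None)
    if not keypoints:
        return [], None
    # keep the strongest descriptor within each angular neighbourhood
    min_sep_px = (min_sep_mrad * 1e-3) / ifov_rad
    order = np.argsort([-kp.response for kp in keypoints])
    kept, kept_pts = [], []
    for i in order:
        p = np.array(keypoints[i].pt)
        if all(np.linalg.norm(p - q) >= min_sep_px for q in kept_pts):
            kept.append(i)
            kept_pts.append(p)
    return [keypoints[i] for i in kept], descriptors[np.array(kept)]
```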
[0026] Some of these descriptors are known to be more or less robust to changes between images: - scale (or zoom variation), - orientation (relative rotation from one image to another), - translation. Whatever the algorithm used, a descriptor is associated with a pixel corresponding to a detail of the scene that has a specific signature relative to its neighborhood in the spectral band of the image acquisition device. In addition to the freedom of scale offered by choosing a zoom and/or a specific field of the acquisition device, the acquisition can be performed in a specific spectral band if the first acquisition device has several channels (e.g. IR/VIS). In addition to the field/number-of-images compromise already mentioned, the operator's interest is to choose the channel with the best contrasts. In the case of nocturnal use, the choice is obviously limited to the IR or active channels that the acquisition device may have.
[0027] C1) From the descriptors extracted from the overlaps, automatic matching (MEC, also referred to as pairing) of the descriptors of the overlaps, from one image to the neighboring image, and automatic estimation of the relative rotation of the images between them, possibly through the reference image. This step is often referred to as coarse registration. The detection of details of the scene giving rise to possible multiple overlaps (more than 2 images) can be performed in a subsequent phase, after a first relative orientation between images has been carried out, in order to guide the search for descriptors that can be linked to more than 2 images.
[0028] This estimation of the orientation and of the pairings between descriptors can be carried out simultaneously, proceeding for example in the following manner known to those skilled in the art: a. calculation of a first relative transformation from a minimum number of 2 MECs with a TRIAD algorithm, b. estimation of the 'good' MECs (inliers) with a RANSAC or PROSAC algorithm (RANdom SAmple Consensus, PROgressive SAmple Consensus), in order to reject outlier MECs between images, c. estimation of an optimal transformation, on the basis of all the good matches (inliers), with an algorithm of the 'q-method', QUEST (QUaternion ESTimator), 'SVD method' or Gauss-Newton type, for example (a sketch of this matching and rotation estimation is given after this paragraph). D1) Identification in the images, automatically or by an operator, of at least one reference 22 of known absolute direction, as shown in FIG. 3b, such as a terrestrial landmark or a celestial object, of precision compatible with the desired performance, and determination, automatically or by the operator, of the image coordinates of each reference. This step is intended to associate the image coordinates with the geographical or spatial direction (azimuth, elevation) of the references used: - in an automatic procedure, for example, an image associated with a reference datum can be correlated automatically with image zones around the descriptors of the PBT. It should be noted that this approach requires having images associated with the references under shooting conditions close to those of the PBT; for this purpose, an approximate absolute orientation of the PBT images, by means of a magnetic compass for example, can facilitate the task by greatly reducing the pairing combinatorics; - in a non-automatic approach it is possible to envisage: o a specific semi-automatic mode, where the operator points the reference at the image center and makes specific measurements (angular, with an inclinometer and a magnetic compass for example, and potentially distance, with a laser rangefinder harmonized with the image center), o a manual pointing mode, where the operator designates the reference in an image so as to associate its image coordinates with its spatial direction. When the reference is a terrestrial landmark, it is easy to determine the characteristics of its direction (azimuth and elevation) from the position of the camera. The accuracy of the direction is then a function of the accuracy of the coordinates of the landmark, of those of the camera position, of the designation accuracy of the landmark, and of the distance between landmark and camera. When the reference is a celestial body, the body may for example be centered on the optical axis, and its direction is then determined from the camera position, a UTC date (for example available on GPS) and ephemerides of the celestial body or an astrometric catalog. The error on the direction then depends on the quality of these quantities in azimuth and elevation, with, for the elevation, a complementary contribution from atmospheric refraction correction residues. When the sweep has covered a complete horizon turn, a single reference may suffice; but when the sweep has covered only a portion of the complete horizon turn, at least two references are to be identified in the images. In practice it suffices to write the equations of the shooting model which connect the vector of space joining the position (x0, y0, z0) of the sensor to the position (xn, yn, zn) of the reference in the scene, and its position in the image characterized by its image coordinates.
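As an illustration of steps a to c above, the following sketch assumes a pure rotation between two images of the sweep: descriptors are matched with a ratio test, bearing vectors are formed from the (approximately known) focal length, a 2-point RANSAC rejects outlier MECs, and the rotation is re-estimated on all inliers with the SVD method. This is a simplified stand-in for the TRIAD/QUEST variants named above, not the patented procedure itself.

```python
import cv2
import numpy as np

def bearings(pts, f, cx, cy):
    # back-project pixel coordinates to unit rays in the camera frame
    v = np.column_stack([(pts[:, 0] - cx) / f, (pts[:, 1] - cy) / f,
                         np.ones(len(pts))])
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def svd_rotation(a, b):
    # rotation R minimizing sum ||b_i - R a_i||^2 (Wahba's problem)
    u, _, vt = np.linalg.svd(b.T @ a)
    return u @ np.diag([1.0, 1.0, np.linalg.det(u @ vt)]) @ vt

def relative_rotation(d1, p1, d2, p2, f, cx, cy, thresh=2e-3, iters=500):
    # a. match descriptors (ratio test keeps the unambiguous pairings)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    v1 = bearings(np.array([p1[g.queryIdx] for g in good]), f, cx, cy)
    v2 = bearings(np.array([p2[g.trainIdx] for g in good]), f, cx, cy)
    # b. RANSAC on minimal 2-point samples to reject outlier MECs
    rng, best = np.random.default_rng(0), None
    for _ in range(iters):
        s = rng.choice(len(good), size=2, replace=False)
        R = svd_rotation(v1[s], v2[s])
        ang = np.arccos(np.clip((v2 * (v1 @ R.T)).sum(axis=1), -1.0, 1.0))
        inliers = ang < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # c. optimal rotation re-estimated on all inliers (SVD method)
    return svd_rotation(v1[best], v2[best]), best
```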
The model integrates: - the internal parameters, characterizing the geometric properties specific to the interior of the imaging device, - the external parameters, fixed according to the attitude of the image or the imaging device and its spatial position.
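For concreteness, a minimal sketch of such a shooting model is given below, under the simplifying assumptions of a pinhole camera without distortion and an East-North-Up geographical frame; it maps a pixel to an azimuth and elevation given the internal parameters and the image attitude. Names are ours.

```python
import numpy as np

def pixel_to_direction(u, v, f, cx, cy, R_cam_to_geo):
    """Map a pixel (u, v) to a geographical direction (azimuth, elevation)."""
    # internal parameters: pinhole ray in the camera frame
    ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])
    # external parameters: attitude of the image, rotating the ray into the
    # East-North-Up geographical frame
    d = R_cam_to_geo @ (ray / np.linalg.norm(ray))
    azimuth = np.arctan2(d[0], d[1])   # from geographical North, clockwise
    elevation = np.arcsin(d[2])        # from the horizontal plane
    return azimuth, elevation
```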
[0029] At this stage, the image orientation parameters have been estimated in an approximate manner. The coarse estimate of these parameters will feed, as initial values, the next step, which will make the final estimation of their values.
[0030] E1) From the matched descriptors of the overlaps, the direction and the image coordinates of each reference, automatic estimation of the attitude of each image, of the position of the first imaging device, and possibly of its internal parameters, including the focal length used during this PBT phase. Although the internal parameters are predetermined, they may be known with insufficient precision (i.e. incompatible with the final directional quality objective, as shown below); this step, often referred to as fine registration, makes it possible to refine them.
[0031] The need for quality of the internal parameter constituted by the focal length is illustrated by a numerical example. Consider a matrix detector of size w = 1000 pixels and optics conferring an FOV of 10°. The focal length of the imaging device is f = w/(2·tan(FOV/2)), i.e. a focal length of 5715 pixels for an average pixel size (or IFOV) of 175 µrad. If the initial focal length is assumed known to within 1% (a value within the traditional uncertainty range for this quantity), this corresponds to an error (of image-to-image over/under-zoom type) of approximately 5 pixels, corresponding to an angular difference of 0.9 mrad, i.e. an image-to-image error of about 1 mrad (of the order of the overall desired performance), which after a few images would quickly become incompatible with the final class of directional quality sought (the zoom-error effect accumulating). This simple calculation indicates how important it is that the proposed process be able to re-estimate the internal parameter that is the focal length of the imaging device.
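This arithmetic can be checked directly:

```python
import math

w, fov = 1000, math.radians(10)
f = w / (2 * math.tan(fov / 2))   # ~5715 pixels
ifov = fov / w                    # ~175 microradians per pixel
edge_err_px = 0.01 * (w / 2)      # 1% focal error at the image edge: ~5 pixels
print(f, ifov * 1e6, edge_err_px * ifov * 1e3)   # ~5715, ~175, ~0.9 mrad
```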
[0032] Different approaches can be used for this step, among which: - BA (Bundle Adjustment), or beam adjustment, to readjust in a coherent manner all the shooting parameters of the images and the characteristics of the observations (here the MEC descriptors), - PnP (Perspective-n-Point), including the resection procedure or P3P based on 3 image points of known geographical coordinates, - P2PA, an active P2P based on the assumption that the position of the imaging device is fixed and known and that the scanning is circular, with beam adjustment (a sketch of a beam adjustment is given after this paragraph). Depending on the user's needs in terms of application and of checking the proper functioning of the automatic algorithms, a step can be provided for: - constructing and displaying a panoramic image from the finely registered images, each pixel of the panoramic image being associated with an absolute direction, - displaying information associated with the descriptors and the descriptor spatial distribution map (CDSD). Generally, distance-type observations can be acquired on an optronic system equipped with a rangefinder harmonized with the line of sight (LdV) of the system, for example a portable system on which the user can manually orient the LdV onto a landscape detail and telemeter it. This detail corresponds either to a descriptor (geographical coordinates and direction initially unknown) or to a landmark (geographical coordinates known a priori), and the distance observation is then useful in the estimation procedure (BA or PnP) implemented during this step E1. F1) From the attitude of each image, the position and the possibly refined internal parameters of the first imaging device, and the image coordinates of each descriptor (the descriptors of the overlaps and the others), automatic calculation of the absolute directions of these descriptors according to the geometric shooting model of the imaging device. These descriptors are archived in a database (BDD) with their radiometric characteristics and their absolute directions.
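A toy sketch of such a beam adjustment (step E1) is given below, assuming SciPy; the unknowns are one rotation vector per image plus the shared focal length, with residuals that make paired descriptor rays coincide across overlaps and pin reference rays to their known absolute directions. The data structures and names are hypothetical placeholders for the outputs of steps B1 to D1.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def rays(pix, f, cx, cy):
    # unit rays in the camera frame for an array of pixel coordinates
    v = np.column_stack([(pix[:, 0] - cx) / f, (pix[:, 1] - cy) / f,
                         np.ones(len(pix))])
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def residuals(x, pairs, refs, cx, cy):
    f, rotvecs = x[0], x[1:].reshape(-1, 3)
    R = [Rotation.from_rotvec(r).as_matrix() for r in rotvecs]
    out = []
    for (i, pix_i), (j, pix_j) in pairs:        # matched descriptors (MEC)
        out.append((rays(pix_i, f, cx, cy) @ R[i].T
                    - rays(pix_j, f, cx, cy) @ R[j].T).ravel())
    for i, pix, d_abs in refs:                  # references anchor the frame
        out.append((rays(pix, f, cx, cy) @ R[i].T - d_abs).ravel())
    return np.concatenate(out)

def fine_registration(n_images, f0, pairs, refs, cx, cy):
    x0 = np.concatenate([[f0], np.zeros(3 * n_images)])
    sol = least_squares(residuals, x0, args=(pairs, refs, cx, cy))
    return sol.x[0], sol.x[1:].reshape(-1, 3)   # refined focal, attitudes
```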
[0033] This archiving is preferably carried out so as to facilitate the search for matches in POL. For this, the descriptors are ordered, in particular in azimuth, in order to exploit the arrangement of their values with a geometric hashing technique in the online matching step, particularly when an approximate azimuth measurement is available (for example through the use of a magnetic compass).
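A minimal sketch of such azimuth-ordered archiving, assuming the BDD is reduced to (azimuth, signature) records, could look as follows; the wrap-around at 0/2π is deliberately omitted for brevity, and the class name is ours.

```python
import bisect

class AzimuthIndex:
    def __init__(self, records):
        # records: [(azimuth_rad, signature), ...] from the PBT BDD
        self.records = sorted(records, key=lambda r: r[0])
        self.keys = [r[0] for r in self.records]

    def candidates(self, az_approx, half_window=0.02):
        # ~1 degree half-window, of the order of a magnetic compass error
        lo = bisect.bisect_left(self.keys, az_approx - half_window)
        hi = bisect.bisect_right(self.keys, az_approx + half_window)
        return self.records[lo:hi]
```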
[0034] A descriptor spatial distribution map (CDSD) may be constructed, containing cells corresponding to solid angles or spatial zones. These cells are determined and positioned in azimuth and elevation with a horizontal and vertical pitch chosen by the process (these angular steps are generally finer than, but of the order of, the FOV of the imaging device). Each cell indicates the number of descriptors and/or directions (those of the descriptors and references) found in this solid angle: - no descriptor, o if the zone is not covered by any image, o if the content of the images in this zone does not give rise to the creation of any descriptor in the cell in question; - unpaired descriptors, because they come from parts of images that have no overlap; - paired descriptors, with their order of multiplicity: the same descriptor can be associated with more than 2 images if the overlap between images is greater than 50%; the overlaps occur in azimuth as well as possibly in elevation.
[0035] In cases where the descriptors are very dense and the scene extends over a large variation in elevation (for example high-relief areas, star backgrounds, etc.), the CDSD is preferably constructed in the form of cells of equal area. To do this, it is recommended to use a representation of the HEALPix type (Hierarchical Equal Area isoLatitude Pixelization); see for example "HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere", 2005. The CDSD can be synthesized in binary form: either to present the zones having no descriptors, or to present the zones having a number of descriptors greater than a given value. The CDSD can be constructed: - in a relative frame, when the directions of the descriptors and references are marked with respect to a frame associated with a reference image, - in an approximate frame, when the directions are oriented from magnetic measurements for example, - in the definitive absolute frame, when the directions are oriented in a frame estimated after a beam adjustment at the end of the field baptism phase, with a quality compatible with the objective. When a magnetic compass is available, for example, the directions can immediately be pre-positioned in the correct cell (with a precision better than a degree). For this, the CDSD cell corresponding to the direction of the descriptor considered is determined by truncating or interpolating the descriptor direction to bring it to the center of a particular cell. Once all the descriptors of all the images are assigned to the CDSD cells, and after the beam adjustment phase, each direction is repositioned with a quality inherited from the reference direction(s), and the CDSD is adjusted from the relative or approximate frame to the final absolute frame. A CDSD in the form of a table is thus available, in which one cell corresponds to a solid angle around the imaging device and contains the number of descriptors extracted from the set of images (overlaps included).
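As an illustration, and assuming the healpy package is available, descriptor directions can be counted in equal-area cells as follows (the function names are ours):

```python
import numpy as np
import healpy as hp

def build_cdsd(azimuths, elevations, nside=64):
    theta = np.pi / 2 - np.asarray(elevations)   # colatitude from elevation
    phi = np.asarray(azimuths) % (2 * np.pi)
    cells = hp.ang2pix(nside, theta, phi)         # equal-area HEALPix cells
    return np.bincount(cells, minlength=hp.nside2npix(nside))

def binary_cdsd(counts, k=1):
    # binary synthesis: cells holding at least k descriptors
    return counts >= k
```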
[0036] The CDSD can be filtered so as to contain, for each spatial cell, only a determined number of descriptors, in order to accelerate the exploitation in the online phase. However, it is more efficient to filter the descriptors in step B1.
[0037] The elimination of descriptors in a cell can in particular be carried out according to the following criteria: - spacing or proximity of the descriptors in the cell, - radiometric intensity of the descriptor's signature, - quality of the associated direction, as soon as it has been filled in with information from a preliminary orientation phase. The CDSD can initially be used in the field baptism phase to: - determine the area of space over which images are acquired; - determine the areas of non-coverage in descriptors over the volume swept during this baptism phase, in order to warn the operator so that he can possibly reacquire these zones with new images if he considers them relevant; - filter descriptors in areas where they are too numerous, and therefore redundant in terms of geometric information input and 'similar' in terms of their radiometric signatures.
[0038] In general, the position of the first imaging device is determined by positioning means equipping said device; it can also be estimated from several references.
[0039] Once this field baptism phase (PBT) has been performed, the operational phase of direction determination, or online operation phase (POL), can begin. It comprises the following steps: A2) Acquisition, automatically or by an operator, of the image (possibly of several images) of the object whose direction is sought, called the current image 20, shown in FIG. 7, from a second imaging device of determined position, which is preferably the same as for the previous phase but which may be different; its position may be the same as in the PBT phase, especially when it is fixed. In the case of a moving platform, the management of the directions is explained below. Note that the object whose direction must be determined in POL may be absent from the images 2 of the PBT, because it was absent from the scene during the PBT (the object being, for example, a moving character or a mobile vehicle). The descriptors present in the environment of the object must a priori be sufficiently numerous and have 'good characteristics' so that, combined with the robust MEC procedure, the presence of a new object in the POL image does not disturb the matching of the descriptors of the current image to those of the PBT, as will be seen later. Thus, the attitude of the POL image can be estimated despite certain changes in the scene between the times of PBT and POL. Images 2 that do not contain the object to be oriented can also be acquired during this POL phase, as shown in FIG. 7.
[0040] When one of these images 2 has an overlap with images of the PBT and another has an overlap with the image 20, then: - the image 20 is processed as in the case of a single POL image, - beforehand, the set of images 2 (other than the current image 20) is treated as in PBT to build a 'bridge' between the image of the object and the existing BDD. Their processing makes it possible to augment the BDD in descriptors and the CDSD in spatial coverage. The BDD and the CDSD can thus be enriched over the course of different POLs whose specificity is to have coverage complementary to the current CDSD. Enrichment is achieved after refinement of the directions of all old and new elements (descriptors and images). B2) Automatic extraction of descriptors in each current image 20.
[0041] C2) Automatic matching of the descriptors of each current image with the descriptors whose absolute direction was calculated during the "field baptism" phase, to determine the absolute direction of the descriptors of each current image.
[0042] These paired descriptors of each current image are preferably associated with those of the descriptor database. If, after extracting the descriptors, their number or quality is judged insufficient, or if the image containing the object is in an area where the CDSD deserves densification, then a local beam adjustment can be made to refine the descriptor directions, in order to enrich the descriptor database with the best information and update the CDSD.
[0043] Several pieces of information can be used to facilitate the search for MECs between the POL descriptors and those of the PBT BDD. Denoting by f1 (in PBT) and f2 (in POL) the focal length/zoom of the imaging devices, and by n1 (in PBT) and n2 (in POL) two scale levels internal to the multi-scale information extraction processes, one can exploit: - at the level of the descriptor radiometric information: a search for MECs at the right scale level, f1·2^n1 = f2·2^n2. In POL, the scale level n2 to be used to attempt to associate a descriptor of the PBT (of scale level n1) is deduced from the approximate focal lengths in POL (f2) and PBT (f1). - at the level of the geometric information: one does not usually solve a 'lost in space' problem, since an approximate orientation of the image is usually available, which must typically be improved by a factor of 10 to 30; the aim is to determine directions of pixels corresponding to objects of the image with the required quality. Thus, starting from the approximate direction of the LdV (or the approximate orientation of the image) in POL and the associated errors, a region or a solid angle is generated within which the correspondences of the BDD will be sought. The two preceding aspects can be exploited jointly or individually: the first alone if there is no approximate orientation of the image; the second alone being acceptable to the extent that the focal lengths of the PBT and POL acquisition devices are of the same order. D2) From the absolute directions of the descriptors of each current image 20, automatic estimation of the attitude of each current image 20 and possibly of the internal parameters of the second imaging device, including its focal length. E2) From the image coordinates of the object in each current image 20, the attitude of each current image 20, and the position and internal parameters (possibly refined) of the second imaging device, automatic calculation of the absolute direction of the object according to a predetermined shooting model of each current image 20 (a sketch of steps C2 to E2 is given after this paragraph). The CDSD constructed during the field baptism can be used online: - to evaluate, in a determined direction (pointing at an object of known coordinates), whether the BDD is consistent in this neighborhood and, even before a current image has been taken, to possibly propose a working field of view as soon as the imaging device has several fields of vision. In the case, for example, of a small-field (PC) PBT, it is advisable to acquire the current image at the top of the useful area, in a large field (GC), so as to guarantee descriptors in the lower half of the image.
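A toy sketch of steps C2 to E2 is given below: matched (pixel ray, absolute direction) pairs yield the attitude by the SVD method, and the object direction follows from its image coordinates; the scale-level relation above gives n2. The names and the East-North-Up frame convention are assumptions.

```python
import numpy as np

def pol_scale_level(n1, f1, f2):
    # f1 * 2**n1 = f2 * 2**n2  =>  n2 = n1 + log2(f1 / f2)
    return int(round(n1 + np.log2(f1 / f2)))

def estimate_attitude(rays_cam, dirs_world):
    # SVD method (Wahba): R such that dirs_world ~ R @ rays_cam
    u, _, vt = np.linalg.svd(dirs_world.T @ rays_cam)
    return u @ np.diag([1.0, 1.0, np.linalg.det(u @ vt)]) @ vt

def object_direction(u_px, v_px, f, cx, cy, R):
    ray = np.array([(u_px - cx) / f, (v_px - cy) / f, 1.0])
    d = R @ (ray / np.linalg.norm(ray))
    return np.arctan2(d[0], d[1]), np.arcsin(d[2])   # azimuth, elevation
```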
[0044] Both the CDSD and the BDD can be enriched online. For example, when the FOV of a current image extends beyond the currently characterized area, the descriptors extracted beyond the characterized area enrich the BDD as soon as the current image can be oriented absolutely after pairing some of its descriptors with others known to the BDD. In general, the position of the second imaging device is determined by positioning means equipping said device; it can also be estimated from several references of the PBT phase. The optronic system considered may be a portable optronic camera provided with one or more channels allowing night vision and/or day vision. It includes memory means, a calculation unit and appropriate interfaces to implement the method, in order to provide: - in PBT: data capture and acquisition, presentation of intermediate results (quality of estimation, statistics on the MECs, reconstructed image strip), and the characteristics and distribution of the descriptors (CDSD), the image attitudes, the focal length or any other estimated internal parameter; - in POL: control elements, such as the characteristics of the descriptors present in the current image, their belonging to a spatial zone actually covered by the CDSD with a possible additional need for description of the scene (see FIG. 7), the number of matches established with the BDD descriptors; and directly exploitable information, such as the orientation (or attitude) of the image (or partially the direction of the LdV) and the location of an object at the image center when the latter has been telemetered (in the case of an optronic system equipped with a rangefinder harmonized with the LdV, for example). The first and/or second imaging device may be installed on a fixed platform, on a tripod or on a Pan & Tilt platform. In the case of an imaging device mounted on a conventional tripod, the operator manually triggers the image acquisitions. This case concerns both a portable optronic camera and a mobile phone equipped with appropriate processing units and interfaces to implement the method.
[0045] On a Pan & Tilt platform, the acquisition can be programmed with a displacement of the imaging device according to its characteristics (area to be covered and FOV in particular) and the desired overlaps between images. In the case of an imaging device mounted on a mobile platform, the implementation of the method assumes knowledge of the position of the imaging device via that of the platform, even if this can be refined in step E1. Moreover, the reference directions of the descriptors are recorded in the same local geographical frame for all the images of the acquired sequence. Several recent mobile phones have an image acquisition mode for constructing a panoramic image from several images that can be acquired over a portion of, or a complete, horizon turn. A processing known as 'panoramic stitching' assembles the individual images in order to present an overall panoramic image. Contrary to the objective of stitching, which is to obtain a larger image than that permitted by the FOV of the sensor, the objective of the method according to the invention is to orient an image (a priori limited to the FOV of the sensor) which will be acquired later, in POL, from a given position, and to determine the absolute direction of one of its pixels, corresponding in general to an object of interest of the scene. Applied to a mobile phone, the method thus supplements the algorithms existing in recent phones, for the purpose of constructing a BDD and a CDSD for determining the direction of an object. The first and/or the second imaging device can be installed aboard a mobile platform such as an aircraft 3, as illustrated for example in FIG. 8. In this case, the optronic system equipped with the imaging device is assumed to have an agility that allows it to orient its line of sight in different directions beneath the platform. The imaging device has a servo mode enabling it to ensure circular acquisition patterns of the images 2, as shown diagrammatically in the figure, whatever the trajectory of the platform, within the limits of possible maskings. The circular acquisition is indicated by the series of 12 images with their overlaps 21, respectively acquired from the 12 positions 31. In this situation, the descriptors extracted from the images representing the ground are characterized, as previously, by their absolute direction, or directly by their geographical coordinates if a digital terrain model (DTM) of the scene is available. The advantage of this approach is that it naturally establishes the geographic coordinates in a common frame (e.g. WGS84). The directions established in each image in the local geographical frame (RGL) corresponding to the position of the platform at the date of acquisition of the image must be expressed in the same common RGL (RGLC). For this RGLC it is possible to choose as origin the reference position (PR) corresponding to that observed when acquiring a PBT image, for example the first or the last image. In general, each direction vm in the RGL associated with an image corresponding to the position Pm can be transferred into a direction vn in the RGL of a position Pn using a linear relation of the form vn = R(Pn)·Rᵀ(Pm)·vm, an expression in which the elements R are rotation matrices (3×3) whose elements depend only, via trigonometric functions, on the geodetic coordinates (longitude and latitude) of the image position at acquisition, and the exponent T indicates the transpose of the corresponding rotation.
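As an illustration, assuming R(P) denotes the rotation from the Earth-fixed ECEF frame to the local East-North-Up frame at the geodetic longitude and latitude of P, the transfer can be written as follows (names are ours):

```python
import numpy as np

def R_ecef_to_enu(lon, lat):
    # rotation from the Earth-fixed ECEF frame to the local East-North-Up frame
    sl, cl = np.sin(lon), np.cos(lon)
    sp, cp = np.sin(lat), np.cos(lat)
    return np.array([[-sl,       cl,      0.0],
                     [-sp * cl, -sp * sl, cp ],
                     [ cp * cl,  cp * sl, sp ]])

def transfer_direction(vm, lon_m, lat_m, lon_n, lat_n):
    """Express in the RGL of Pn a direction vm known in the RGL of Pm."""
    # vn = R(Pn) . R^T(Pm) . vm
    return R_ecef_to_enu(lon_n, lat_n) @ R_ecef_to_enu(lon_m, lat_m).T @ vm
```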
When the positions Pn and Pm are not too far apart, a differential relation can also be used to characterize, as elementary angles, the deviation of the direction vn with respect to the direction vm. In practice, airborne systems access the kinematic parameters of the platform (position 31, velocity, acceleration) and its attitude. They generally have their own means of establishing the attitude of the image in a local geographic reference frame, and therefore in any type of reference frame. This information has associated errors and is used to initialize and linearize the nonlinear equations participating in the beam adjustment procedure. At the end of the procedure, the values of the initial parameters are improved and characterized by a covariance. If the system accesses one or more GCPs (ground control points), the beam adjustment performance can be significantly improved. To fix ideas, a system whose initial image orientations are in error by one to a few milli-radians generates a minimum error of 30 m at 30 km. By elementary reasoning, we can evaluate the effect of this error in azimuth and thus in tangential positioning. It should be noted that a circular error in elevation results in a much larger ground error for oblique sights, since the circular error is approximately multiplied by the ratio of the distance (between the system and the point to be located) to the height of the system above the ground. Access to a GCP with a quality of 5 m at 20 km gives access to an orientation quality of 1/4 milli-radian, i.e. 4 times better than the initial performance; a GCP of 3 m at 30 km will improve the orientation performance by an order of magnitude, with a potential angular performance of 1/10 milli-radian. Note finally that the beam adjustment algorithm, used for example in the fine registration step, will propagate the benefit of such an absolute reference to the orientation of all the overlapping images. On an aircraft-type mobile platform (or even a ship or a vehicle), the following particularities are emphasized:
- the positions of the different images are substantially different but are known, while being able, like the orientations of the images, to be refined in the beam adjustment step;
- the relative attitude is generally quite well known (a few tens of µrad over a short period of time, hence a platform maneuvering little), which facilitates the matching (MEC) of the descriptors (step C1) and the initialization of the nonlinear estimates (step E1);
- the absolute orientation of the images is also quite good, of the order of a mrad (which facilitates step D1); on the other hand, a mounting bias (a few mrad) will preferably be modeled in the equations of the shooting model so as to estimate it in the same way as the focal length in step E1.
Of course, knowing the direction of the object, the object can also be located geographically when, for example, the imaging device is equipped with a rangefinder, and/or by using methods known to those skilled in the art. This method of determining the absolute direction of an object in an image may in particular be implemented by means of a computer program product, which computer program comprises code instructions for carrying out the steps of the method. It is recorded on a computer readable medium. The medium may be electronic, magnetic, optical, electromagnetic, or a diffusion medium of infrared type.
Such media are, for example, random access memories (RAM), read-only memories (ROM), magnetic or optical tapes, diskettes or disks (Compact Disk - Read Only Memory (CD-ROM), Compact Disk - Read/Write (CD-R/W) and DVD).
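The angular figures quoted above follow from the small-angle approximation; the short Python check below is our own illustration and not part of the original description:

```python
# Small-angle sanity check: angular error [mrad] ~ 1000 * position error / distance.
def angular_error_mrad(pos_err_m: float, dist_m: float) -> float:
    return 1e3 * pos_err_m / dist_m

print(angular_error_mrad(30.0, 30e3))  # 1.0  mrad: 30 m error at 30 km
print(angular_error_mrad(5.0, 20e3))   # 0.25 mrad: a GCP of 5 m quality at 20 km
print(angular_error_mrad(3.0, 30e3))   # 0.1  mrad: a GCP of 3 m quality at 30 km

def ground_error_m(elev_err_rad: float, dist_m: float, height_m: float) -> float:
    """Ground error for an oblique sight: the circular error (elevation error
    times distance) is multiplied by the distance-to-height ratio."""
    return (elev_err_rad * dist_m) * (dist_m / height_m)
```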
[0046] Although the invention has been described in connection with particular embodiments, it is obvious that it is in no way limited thereto and that it includes all the technical equivalents of the means described, as well as their combinations, if these fall within the scope of the invention.
Claims (19)
[0001]
1. Method for determining the direction of an object of a scene (1) in an optronic image, with a predetermined desired performance, characterized in that it comprises a "field baptism" phase and an online operation phase,
the "field baptism" phase comprising the following steps:
- acquisition by circular scanning, by means of a first optronic imaging device of determined position, of a series of partially overlapping optronic images (2) comprising one or more images of the scene (step A1),
- automatic extraction, in the images, of descriptors defined by their image coordinates and their radiometric characteristics, with at least one descriptor of unknown direction in each overlap (21) of images (step B1),
- from the descriptors of the overlaps, automatic estimation of the relative rotation of the images with respect to one another and matching of the descriptors of the overlaps (step C1),
- identification, in the images, of at least one reference (22) of known absolute direction, of precision compatible with the desired performance, and determination of the image coordinates of each reference (step D1),
- from the matched descriptors of the overlaps and from the direction and image coordinates of each reference, automatic estimation of the attitude of each image and of the position of the first imaging device, called the fine registration step (step E1),
- from the attitude of each image, the position and the predetermined internal parameters of the first imaging device, and the image coordinates of each descriptor, calculation of the absolute directions of the descriptors according to a predetermined shooting model of the imaging device (step F1),
the online operation phase comprising the following steps:
- acquisition of at least one image of the object, called the current image (20), from a second imaging device of determined position (step A2),
- extraction of descriptors in each current image (step B2),
- matching of the descriptors of each current image with the descriptors whose absolute direction was calculated during the "field baptism" phase, to determine the absolute direction of the descriptors of each current image (step C2),
- from the absolute directions of the descriptors of each current image, estimation of the attitude of each current image (step D2),
- from the image coordinates of the object in each current image, the attitude of each current image, and the position and predetermined internal parameters of the second imaging device, calculation of the absolute direction of the object according to a predetermined shooting model of each current image (step E2).
[0002]
2. Automatic method for determining the direction of an object in an image according to the preceding claim, characterized in that the focal length of the first imaging device is different from the focal length of the second imaging device.
[0003]
3. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the internal parameters, including the focal length, of the first imaging device of the PBT phase are estimated during the fine registration step, and/or the internal parameters, including the focal length, of the second imaging device of the online phase are estimated during the step of estimating the attitude of the current image.
[0004]
4. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the first and second imaging devices are the same imaging device.
[0005]
5. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the descriptors of the "field baptism" phase are stored in a database with their radiometric characteristics and their absolute directions.
[0006]
6. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that a density map of the spatial distribution of the descriptors is constructed before the online operation phase.
[0007]
7. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the series of images acquired during the "field baptism" phase covers a complete turn of the horizon.
[0008]
8. Automatic method for determining the direction of an object in an image according to one of claims 1 to 6, characterized in that the series of images acquired during the "field baptism" phase covers a portion of a turn of the horizon, and in that at least two references are identified in the images.
[0009]
9. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the position of the first imaging device is determined by positioning means equipping said device or is estimated from several references.
[0010]
10. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the position of the second imaging device is determined by positioning means equipping said device or is estimated from several references.
[0011]
11. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that it comprises a step of constructing a panoramic image from the finely registered images, and in that each pixel of the panoramic image is associated with an absolute direction.
[0012]
12. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the series of acquired images are video images.
[0013]
13. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the series of images is acquired frame by frame.
[0014]
14. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that each acquired image is associated with an elevation.
[0015]
15. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that each reference is a terrestrial landmark or a celestial object.
[0016]
16. Automatic method for determining the direction of an object in an image according to one of the preceding claims, characterized in that the imaging device is mounted on board a platform of fixed position.
[0017]
17. Automatic method for determining the direction of an object in an image according to one of claims 1 to 15, characterized in that the imaging device is mounted on board a mobile platform of known trajectory.
[0018]
18. Automatic method for determining the direction of an object in an image according to the preceding claim, characterized in that the mobile platform is a land or naval vehicle or an aircraft.
[0019]
19. A computer program product, said computer program comprising code instructions for performing the steps of the method of any one of the preceding claims, when said program is run on a computer.